Long Beach


Dozens of cargo containers fall off vessel at Port of Long Beach. Investigators search for answers

Los Angeles Times

A boat uses jets of water to corral shipping containers that fell off a cargo vessel Tuesday at the Port of Long Beach.


No more fireworks? Big change coming to 4th of July at Pasadena's Rose Bowl

Los Angeles Times

Marking the end of a longtime tradition, the Fourth of July celebration at the Rose Bowl in Pasadena will not feature a fireworks show this year. Instead, there will be a drone show. The move comes as some venues have switched from fireworks to drone shows -- in which a fleet of drones performs a choreographed light show -- to celebrate the holiday. But drone shows have fallen flat for some. Notably, Redondo Beach and Laguna Beach switched back to fireworks after trying out drone shows, and some promoters of fireworks shows have criticized efforts to transition to drones.


Development of an End-to-end Machine Learning System with Application to In-app Purchases

Varelas, Dionysios, Bonan, Elena, Anderson, Lewis, Englesson, Anders, Åhrling, Christoffer, Chmielewski-Anders, Adrian

arXiv.org Artificial Intelligence

Machine learning (ML) systems have become vital in the mobile gaming industry. Companies like King have been using them in production to optimize various parts of the gaming experience. One important area is in-app purchases: purchases made in the game by players in order to enhance and customize their gameplay experience. In this work we describe how we developed an ML system in order to predict when a player is expected to make their next in-app purchase. These predictions are used to present offers to players. We briefly describe the problem definition, modeling approach and results and then, in considerable detail, outline the end-to-end ML system. We conclude with a reflection on challenges encountered and plans for future work.
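The abstract describes the system rather than its code, but the core task can be pictured as binary classification: will this player purchase soon? The features, labels, and plain logistic-regression model below are all assumptions for illustration, not King's actual pipeline:

```python
import math

def sigmoid(z):
    z = max(-60.0, min(60.0, z))  # clamp to avoid overflow in exp
    return 1.0 / (1.0 + math.exp(-z))

def train_logreg(X, y, lr=0.1, epochs=500):
    """Fit a tiny logistic-regression model with plain SGD."""
    w, b = [0.0] * len(X[0]), 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            p = sigmoid(sum(wj * xj for wj, xj in zip(w, xi)) + b)
            g = p - yi  # gradient of the log loss w.r.t. the logit
            w = [wj - lr * g * xj for wj, xj in zip(w, xi)]
            b -= lr * g
    return w, b

# Hypothetical features: (sessions per day, days since last purchase)
X = [(5.0, 1.0), (4.0, 2.0), (0.5, 30.0), (1.0, 25.0)]
y = [1, 1, 0, 0]  # 1 = made an in-app purchase within the next 7 days
w, b = train_logreg(X, y)

def p_purchase(x):
    """Predicted probability that a player purchases soon."""
    return sigmoid(sum(wj * xj for wj, xj in zip(w, x)) + b)
```

Offers would then be presented when `p_purchase` crosses a tuned threshold; the production system the paper describes adds feature pipelines, serving, monitoring, and retraining around such a core.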


Optimizing Tensor Computation Graphs with Equality Saturation and Monte Carlo Tree Search

Hartmann, Jakob, He, Guoliang, Yoneki, Eiko

arXiv.org Artificial Intelligence

The real-world effectiveness of deep neural networks often depends on their latency, thereby necessitating optimization techniques that can reduce a model's inference time while preserving its performance. One popular approach is to sequentially rewrite the input computation graph into an equivalent but faster one by replacing individual subgraphs. This approach gives rise to the so-called phase-ordering problem, in which the application of one rewrite rule can eliminate the possibility of applying an even better one later on. Recent work has shown that equality saturation, a technique from compiler optimization, can mitigate this issue by first building an intermediate representation (IR) that efficiently stores multiple optimized versions of the input program before extracting the best solution in a second step. In practice, however, memory constraints prevent the IR from capturing all optimized versions and thus reintroduce the phase-ordering problem in the construction phase. In this paper, we present a tensor graph rewriting approach that uses Monte Carlo tree search to build superior IRs by identifying the most promising rewrite rules. We also introduce a novel extraction algorithm that can provide fast and accurate runtime estimates of tensor programs represented in an IR. Our approach improves the inference speedup of neural networks by up to 11% compared to existing methods.
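The phase-ordering problem is easy to reproduce in miniature. In the toy sketch below (strings instead of e-graphs, expression length as a stand-in cost, and both rewrite rules invented for illustration), a greedy rule order fires a strength-reduction rewrite that blocks a strictly better cancellation, while a search over rule orders recovers it:

```python
from itertools import permutations

RULES = [
    ("a*2", "a<<1"),   # strength reduction: multiply becomes shift
    ("(a*2)/2", "a"),  # cancellation: multiply then divide is a no-op
]

def apply_greedy(expr, rules):
    """Repeatedly apply the first applicable rule (one fixed phase order)."""
    changed = True
    while changed:
        changed = False
        for src, dst in rules:
            if src in expr:
                expr = expr.replace(src, dst)
                changed = True
                break
    return expr

def best_over_orders(expr, rules):
    """Try every rule order and keep the cheapest result (cost = length)."""
    return min((apply_greedy(expr, list(order))
                for order in permutations(rules)), key=len)

start = "((a*2)/2)"
greedy = apply_greedy(start, RULES)    # rule 1 fires first and blocks rule 2
best = best_over_orders(start, RULES)  # cancellation applied first wins
```

Equality saturation sidesteps the ordering by storing both rewritten versions in one IR; the paper's contribution is guiding, via Monte Carlo tree search, which rewrites enter that IR when memory forbids storing them all.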


Pilot program offers Long Beach homeowners up to $250,000 in low-interest loans to build ADUs

Los Angeles Times

Long Beach's Backyard Builders Program uses one-time funding to provide as many as 10 homeowners with low- to zero-interest loans of up to $250,000 to build accessory dwelling units, or ADUs, on their lots. Eager to boost the supply of affordable housing, city officials in Long Beach devised a program that could help a limited number of homeowners build an extra unit on their land. But before they could launch it, they had to decide what to call it. "We've been playing with a name for a while," Mayor Rex Richardson said, noting that a news release touting the program had been delayed for days while officials settled on a name.


All in One: Multi-task Prompting for Graph Neural Networks

Sun, Xiangguo, Cheng, Hong, Li, Jia, Liu, Bo, Guan, Jihong

arXiv.org Artificial Intelligence

Recently, ''pre-training and fine-tuning'' has been adopted as a standard workflow for many graph tasks, since it can transfer general graph knowledge to relieve the lack of annotations in each application. However, graph tasks at the node, edge, and graph levels are highly diverse, making the pre-training pretext often incompatible with these multiple tasks. This gap may even cause ''negative transfer'' to a specific application, leading to poor results. Inspired by prompt learning in natural language processing (NLP), which has proved highly effective in leveraging prior knowledge for various NLP tasks, we study prompting for graphs with the motivation of bridging the gap between pre-trained models and various graph tasks. In this paper, we propose a novel multi-task prompting method for graph models. Specifically, we first unify the format of graph prompts and language prompts with the prompt token, token structure, and inserting pattern. In this way, the prompting idea from NLP can be seamlessly introduced to the graph area. Then, to further narrow the gap between various graph tasks and state-of-the-art pre-training strategies, we study the task space of various graph applications and reformulate downstream problems as graph-level tasks. Afterward, we introduce meta-learning to efficiently learn a better initialization for the multi-task graph prompt, so that our prompting framework is more reliable and general across tasks. We conduct extensive experiments, the results of which demonstrate the superiority of our method.
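The ''inserting pattern'' can be pictured with a minimal sketch: prompt token vectors are combined and added to every node's feature vector before the frozen pre-trained model runs. The uniform weighting below is an illustrative simplification, not the paper's learned insertion scheme:

```python
def insert_prompt(node_feats, prompt_tokens):
    """Add the (here: uniform) average of the prompt tokens to each node
    feature, producing the prompted graph fed to a frozen GNN."""
    d = len(prompt_tokens)
    dim = len(node_feats[0])
    combo = [sum(p[i] for p in prompt_tokens) / d for i in range(dim)]
    return [[xi + ci for xi, ci in zip(x, combo)] for x in node_feats]

# Two 2-D prompt tokens shift a single node feature along the second axis.
prompted = insert_prompt([[1.0, 0.0]], [[0.0, 2.0], [0.0, 4.0]])
```

Only the prompt tokens (and their combination weights, in the full method) are trained per downstream task; the backbone stays fixed.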


Partial-label Learning with Mixed Closed-set and Open-set Out-of-candidate Examples

He, Shuo, Feng, Lei, Yang, Guowu

arXiv.org Artificial Intelligence

Partial-label learning (PLL) relies on a key assumption that the true label of each training example must be in the candidate label set. This restrictive assumption may be violated in complex real-world scenarios, and thus the true label of some collected examples could be unexpectedly outside the assigned candidate label set. In this paper, we term the examples whose true label is outside the candidate label set OOC (out-of-candidate) examples, and pioneer a new PLL study to learn with OOC examples. We consider two types of OOC examples in reality, i.e., the closed-set/open-set OOC examples whose true label is inside/outside the known label space. To solve this new PLL problem, we first calculate the wooden cross-entropy loss from candidate and non-candidate labels respectively, and dynamically differentiate the two types of OOC examples based on specially designed criteria. Then, for closed-set OOC examples, we conduct reversed label disambiguation in the non-candidate label set; for open-set OOC examples, we leverage them for training by utilizing an effective regularization strategy that dynamically assigns random candidate labels from the candidate label set. In this way, the two types of OOC examples can be differentiated and further leveraged for model training. Extensive experiments demonstrate that our proposed method outperforms state-of-the-art PLL methods.
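Two quantities the pipeline builds on can be sketched directly: the classic PLL loss over the candidate set, and the probability mass a model assigns outside it, which is one natural raw signal for flagging OOC examples. This is a simplified illustration; the paper's ''wooden'' cross-entropy variant and its OOC differentiation criteria are more involved:

```python
import math

def candidate_loss(probs, candidates):
    """Classic PLL objective: negative log of the total probability mass
    the model assigns to the candidate label set."""
    return -math.log(sum(probs[y] for y in candidates))

def noncandidate_mass(probs, candidates):
    """Probability mass leaking to non-candidate labels; a large value
    suggests the true label may lie outside the candidate set (OOC)."""
    return sum(p for y, p in enumerate(probs) if y not in candidates)

probs = [0.7, 0.2, 0.1]   # model posterior over 3 classes
cands = {0, 1}            # annotator-provided candidate set
```

In the paper's setting, examples flagged this way are then split further: closed-set OOC examples get reversed label disambiguation over the non-candidate set, while open-set ones are regularized with dynamically assigned random candidate labels.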


TSMixer: Lightweight MLP-Mixer Model for Multivariate Time Series Forecasting

Ekambaram, Vijay, Jati, Arindam, Nguyen, Nam, Sinthong, Phanwadee, Kalagnanam, Jayant

arXiv.org Artificial Intelligence

Transformers have gained popularity in time series forecasting for their ability to capture long-sequence interactions. However, their high memory and computing requirements pose a critical bottleneck for long-term forecasting. To address this, we propose TSMixer, a lightweight neural architecture exclusively composed of multi-layer perceptron (MLP) modules for multivariate forecasting and representation learning on patched time series. Inspired by MLP-Mixer's success in computer vision, we adapt it for time series, addressing challenges and introducing validated components for enhanced accuracy. This includes a novel design paradigm of attaching online reconciliation heads to the MLP-Mixer backbone to explicitly model time-series properties such as hierarchy and channel correlations. We also propose novel hybrid channel modeling and a simple gating approach to effectively handle noisy channel interactions and generalization across diverse datasets. By incorporating these lightweight components, we significantly enhance the learning capability of simple MLP structures, outperforming complex Transformer models with minimal computing usage. Moreover, TSMixer's modular design enables compatibility with both supervised and masked self-supervised learning methods, making it a promising building block for time-series Foundation Models. TSMixer outperforms state-of-the-art MLP and Transformer models in forecasting by a considerable margin of 8-60%. It also outperforms the latest strong benchmarks of Patch-Transformer models (by 1-2%) with a significant reduction in memory and runtime (2-3X). The source code of our model is officially released as PatchTSMixer on HuggingFace. Model: https://huggingface.co/docs/transformers/main/en/model_doc/patchtsmixer Examples: https://github.com/ibm/tsfm/#notebooks-links
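The "patched time series" input is simple to sketch. Assuming a univariate series, a patch length, and a stride (all hypothetical values; the multivariate channels and the mixing layers themselves are omitted):

```python
def patch(series, patch_len, stride):
    """Split a 1-D series into (possibly overlapping) patches; MLP-Mixer
    blocks then mix across patches and within each patch."""
    return [series[i:i + patch_len]
            for i in range(0, len(series) - patch_len + 1, stride)]

patches = patch(list(range(8)), patch_len=4, stride=2)
```

Patching turns one long sequence into a short sequence of short vectors, which is what lets a pure-MLP backbone stand in for attention over long horizons.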


Discovering Dynamic Causal Space for DAG Structure Learning

Liu, Fangfu, Ma, Wenchang, Zhang, An, Wang, Xiang, Duan, Yueqi, Chua, Tat-Seng

arXiv.org Machine Learning

Discovering causal structure from purely observational data (i.e., causal discovery), which aims to identify causal relationships among variables, is a fundamental task in machine learning. The recent invention of differentiable score-based DAG learners is a crucial enabler, reframing the combinatorial optimization problem as differentiable optimization with a DAG constraint over the space of directed graphs. Despite their great success, these cutting-edge DAG learners use DAG-ness-independent score functions to evaluate directed graph candidates, without considering graph structure. As a result, measuring data fitness alone, regardless of DAG-ness, inevitably leads to discovering suboptimal DAGs and model vulnerabilities. To this end, we propose a dynamic causal space for DAG structure learning, coined CASPER, that integrates the graph structure into the score function as a new measure in the causal space to faithfully reflect the causal distance between the estimated and ground-truth DAGs. CASPER revises the learning process and enhances DAG structure learning via adaptive attention to DAG-ness. Grounded by empirical visualization, CASPER, as a space, satisfies a series of desired properties, such as structure awareness and noise robustness. Extensive experiments on both synthetic and real-world datasets clearly validate the superiority of CASPER over state-of-the-art causal discovery methods in terms of accuracy and robustness.
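The differentiable DAG constraint these learners rely on is commonly the NOTEARS-style acyclicity function h(A) = tr(e^{A∘A}) − d, which is zero exactly when the weighted adjacency matrix A describes a DAG. A pure-Python sketch using a truncated power series for the matrix exponential (illustrative background on the constraint, not CASPER's structure-aware score):

```python
def matmul(X, Y):
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def acyclicity(A, terms=10):
    """h(A) = tr(exp(A o A)) - d via exp(M) ~ sum_k M^k / k!.
    Returns 0 for DAGs; positive values penalize cycles."""
    n = len(A)
    H = [[A[i][j] ** 2 for j in range(n)] for i in range(n)]   # A o A
    E = [[float(i == j) for j in range(n)] for i in range(n)]  # running exp
    P = [[float(i == j) for j in range(n)] for i in range(n)]  # H^k
    fact = 1.0
    for k in range(1, terms):
        P = matmul(P, H)
        fact *= k
        E = [[E[i][j] + P[i][j] / fact for j in range(n)] for i in range(n)]
    return sum(E[i][i] for i in range(n)) - n

dag = [[0.0, 1.0], [0.0, 0.0]]  # edge 0 -> 1, acyclic
cyc = [[0.0, 1.0], [1.0, 0.0]]  # 0 <-> 1, a 2-cycle
```

Score-based learners minimize a data-fitness term under this constraint; CASPER's point is that the fitness term itself should also be DAG-aware.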


Towards Aligned Canonical Correlation Analysis: Preliminary Formulation and Proof-of-Concept Results

Cheng, Biqian, Papalexakis, Evangelos E., Chen, Jia

arXiv.org Machine Learning

Canonical Correlation Analysis (CCA) has been widely applied to jointly embed multiple views of data in a maximally correlated latent space. However, the alignment between the various data perspectives, which traditional approaches require, is unclear in many practical cases. In this work we propose a new framework, Aligned Canonical Correlation Analysis (ACCA), to address this challenge by iteratively solving for the alignment and the multi-view embedding.